17 research outputs found

    A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination

    The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view) by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when interreflections are considered. We introduce a framework for precomputing a multi-bounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye, and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing to precompute the tensor of coefficients for BRDF basis functions. At runtime, the high-dimensional tensors can be reduced to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes, with interactive feedback rendered with global illumination.
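
    The runtime step described above (collapsing the precomputed tensor to a per-pixel dot product) can be illustrated with a rough sketch. This is a minimal illustration under assumed names and shapes, not the paper's actual data layout: once an edited BRDF is expressed as weights over the basis, every monomial of those weights is a scalar shared by all pixels, so each pixel's color becomes a dot product between its precomputed coefficient vector and that shared monomial vector.

```python
# Minimal sketch (assumed names and shapes): per-pixel dot product against the
# monomials of the current BRDF basis weights, as described in the abstract.
import itertools
import numpy as np

def monomial_vector(brdf_weights, max_degree):
    """All products of BRDF basis weights up to the polynomial degree (one factor per bounce)."""
    terms = [1.0]  # constant term
    for degree in range(1, max_degree + 1):
        for combo in itertools.combinations_with_replacement(brdf_weights, degree):
            terms.append(float(np.prod(combo)))
    return np.asarray(terms)

def render(pixel_coeffs, brdf_weights, max_degree=3):
    """pixel_coeffs: (num_pixels, num_monomials) precomputed coefficients, one row per pixel."""
    m = monomial_vector(brdf_weights, max_degree)  # shared across the whole image
    return pixel_coeffs @ m                        # a single dot product per pixel
```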

    Real-time BRDF editing in complex lighting

    Current systems for editing BRDFs typically allow users to adjust analytic parameters while visualizing the results in a simplified setting (e.g., an unshadowed point light). This paper describes a real-time rendering system that enables interactive edits of BRDFs, as rendered in their final placement on objects in a static scene, lit by direct, complex illumination. All-frequency effects (ranging from near-mirror reflections and hard shadows to diffuse shading and soft shadows) are rendered using a precomputation-based approach. Inspired by real-time relighting methods, we create a linear system that fixes lighting and view to allow real-time BRDF manipulation. In order to linearize the image’s response to BRDF parameters, we develop an intermediate curve-based representation, which also reduces the rendering and precomputation operations to 1D while maintaining accuracy for a very general class of BRDFs. Our system can be used to edit complex analytic BRDFs (including anisotropic models), as well as measured reflectance data. We improve on the standard precomputed radiance transfer (PRT) rendering computation by introducing an incremental rendering algorithm that takes advantage of frame-to-frame coherence. We show that it is possible to render reference-quality images while only updating 10% of the data at each frame, sustaining frame rates of 25-30 fps.
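
    As a rough illustration of the two ideas above (the linear, fixed-lighting and fixed-view response to BRDF parameters, and incremental updates that touch only a fraction of the data per frame), the sketch below treats the 1D curve representation as a sampled vector and updates only the curve samples with the largest change each frame. The names, the selection heuristic, and the 10% budget are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch only: with lighting and view fixed, the image is linear
# in the sampled 1D BRDF curve, so edits can be applied as deltas on a subset
# of curve samples each frame.
import numpy as np

def full_render(transport, brdf_curve):
    """transport: (num_pixels, num_samples); brdf_curve: (num_samples,) sampled 1D BRDF curve."""
    return transport @ brdf_curve

def incremental_render(prev_image, transport, applied_curve, target_curve, budget=0.1):
    """Apply only the largest curve deltas this frame (roughly a 10% update budget)."""
    delta = target_curve - applied_curve
    k = max(1, int(budget * delta.size))
    changed = np.argsort(np.abs(delta))[-k:]                 # samples with the biggest change
    image = prev_image + transport[:, changed] @ delta[changed]
    new_curve = applied_curve.copy()
    new_curve[changed] = target_curve[changed]               # remaining deltas carry over to later frames
    return image, new_curve
```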

    Efficient Shadows for Sampled Environment Maps

    This paper addresses the problem of efficiently calculating shadows from environment maps in the context of ray tracing. Since accurate rendering of shadows from environment maps requires hundreds of lights, the expensive computation is determining visibility from each pixel to each light direction. We show that coherence in both the spatial and angular domains can be used to reduce the number of shadow rays that need to be traced. Specifically, we use a coarse-to-fine evaluation of the image, predicting visibility by reusing visibility calculations from 4 nearby pixels that have already been evaluated. This simple method allows us to explicitly mark regions of uncertainty in the prediction. By only tracing rays in these and neighboring directions, we are able to reduce the number of shadow rays traced by up to a factor of 20 while maintaining error rates below 0.01%. For many scenes, our algorithm can add shadowing from hundreds of lights at only twice the cost of rendering without shadows. Sample source code is available online.
    [Figure 1: A scene illuminated by a sampled environment map. Left: standard ray tracing in POV-Ray, using shadow-ray tracing to determine light-source visibility for the 400 lights sampled from the environment according to [Agarwal et al. 2003] (60 million shadow rays). Center: our Coherence-Based Sampling renders the same scene with a 90% reduction in shadow rays traced (6 million), at approximately equal quality. Right: standard ray tracing in POV-Ray with a reduced sampling of the environment map (50 lights, again using [Agarwal et al. 2003]) to approximate the work of our method (7 million shadow rays); note that the lower sampling does not faithfully reproduce the soft shadows.]
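
    A loose sketch of the coherence-based visibility prediction described above: where four already-evaluated nearby pixels agree on a light's visibility, that result is reused; where they disagree, the light is marked uncertain and a shadow ray is traced. In the paper, rays are also traced toward directions neighboring the uncertain ones; that step is omitted here, and all names (e.g. trace_shadow_ray) are stand-ins, not the authors' actual code.

```python
# Illustrative sketch: reuse unanimous neighbor visibility, trace shadow rays
# only for lights where the four nearby pixels disagree. trace_shadow_ray is a
# stand-in for the renderer's actual visibility query.
import numpy as np

def predict_visibility(neighbor_vis, pixel, trace_shadow_ray):
    """neighbor_vis: (4, num_lights) boolean visibility at 4 nearby, already-evaluated pixels."""
    agree_visible = neighbor_vis.all(axis=0)    # all four neighbors see the light
    agree_blocked = ~neighbor_vis.any(axis=0)   # none of the neighbors see it
    uncertain = ~(agree_visible | agree_blocked)

    visibility = agree_visible.copy()           # reuse the unanimous predictions as-is
    for light in np.flatnonzero(uncertain):     # trace rays only where neighbors disagree
        visibility[light] = trace_shadow_ray(pixel, light)
    return visibility
```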